Goto


Neural Information Processing Systems

We respectfully disagree that it "makes more sense to take a decaying step-size." Indeed, the improved error bound of DTDT comes at the price of doubling the communication requirements of DTDL. Code for the presented experiments is straightforward, so it was not included.


We thank all reviewers for their time and effort in reviewing our paper

Neural Information Processing Systems

We thank all reviewers for their time and effort in reviewing our paper. We set up experiments in PyTorch with ResNet18 (He et al., 2016) on CIFAR10 (Krizhevsky, 2009). Results on CIFAR10 are shown in Figure 1 above. Figure 1: Evaluations on CIFAR10: training loss (1st column), test accuracy (2nd column), and total number of transmitted bits (MB). We will release our code on GitHub in the final version.


like ours there are subtleties, and highly appreciate the time and effort that the reviewers are putting in to digest these

Neural Information Processing Systems

We would like to thank the reviewers for their comments and feedback. Janzing et al. [9] write down the same equation. We will follow the reviewer's suggestion. The decomposition for conditional SVs follows by replacing "conditioning" with intervention. The decomposition is introduced in Section 3 to assist our illustration of how the different SVs attribute a model's output. Unlike conditional (asymmetric) SVs, causal SVs provide the right intuition in the case of common confounding. See also the previous paragraph. Causal SVs appear to fare better than the reviewer suggests.


O-Forge: An LLM + Computer Algebra Framework for Asymptotic Analysis

Khaitan, Ayush, Ganesh, Vijay

arXiv.org Artificial Intelligence

Large language models have recently demonstrated advanced capabilities in solving IMO and Putnam problems, yet their role in research mathematics has remained fairly limited. The key difficulty is verification: suggested proofs may look plausible, but cannot be trusted without rigorous checking. We present a framework, called LLM+CAS, and an associated tool, O-Forge, that couples frontier LLMs with a computer algebra system (CAS) in an In-Context Symbolic Feedback loop to produce proofs that are both creative and symbolically verified. Our focus is on asymptotic inequalities, a topic that often involves difficult proofs and appropriate decomposition of the domain into the "right" subdomains. Many mathematicians, including Terry Tao, have suggested that using AI tools to find the right decompositions can be very useful for research-level asymptotic analysis. In this paper, we show that our framework LLM+CAS turns out to be remarkably effective at proposing such decompositions via a combination of a frontier LLM and a CAS. More precisely, we use an LLM to suggest domain decompositions, and a CAS (such as Mathematica) to verify each piece axiomatically. Using this loop, we answer a question posed by Terence Tao: whether LLMs coupled with a verifier can be used to help prove intricate asymptotic inequalities. More broadly, we show how AI can move beyond contest math towards research-level tools for professional mathematicians.
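The propose-then-verify loop described above can be sketched in a few lines. This is a hypothetical, minimal sketch, not the O-Forge implementation: it assumes sympy as a stand-in CAS (the paper uses Mathematica), and the LLM's decomposition step is hard-coded in `llm_propose_decomposition` rather than produced by a model call.

```python
# Minimal sketch of an In-Context Symbolic Feedback loop:
# an "LLM" proposes a domain decomposition, a CAS verifies each piece.
import sympy as sp

x = sp.Symbol('x', real=True)

def llm_propose_decomposition():
    """Stand-in for the frontier-LLM call: split the domain [1, oo)
    at x = 2. Each piece is a list of inequalities for one subdomain."""
    return [[x >= 1, x <= 2], [x > 2]]

def cas_verifies(claim, piece):
    """CAS step: the claim holds on the subdomain iff its negation,
    intersected with the subdomain's inequalities, has no solution."""
    sol = sp.reduce_inequalities([claim.negated, *piece], x)
    return sol == sp.false

def feedback_loop(claim):
    """One round of the loop: propose a decomposition, verify each piece.
    In a full system, a failing piece would be fed back to the LLM
    in-context so it can refine the decomposition."""
    pieces = llm_propose_decomposition()
    return all(cas_verifies(claim, piece) for piece in pieces)

# A toy asymptotic inequality on [1, oo): x^2 + x <= 2*x^2.
claim = sp.Le(x**2 + x, 2 * x**2)
print(feedback_loop(claim))  # prints True: every piece was verified
```

The key design point is that the creative step (choosing where to cut the domain) and the trusted step (checking the inequality on each piece) are cleanly separated, so the LLM's output never has to be taken on faith.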



First of all, we wish to sincerely thank the anonymous reviewers for their time and effort in reviewing our NeurIPS submission.

Neural Information Processing Systems

In the revised version, we will make this clearer in the "Related Work" section. Figure 2 illustrates the classification model. The parameter σ corresponds to the width of the Gaussian kernel, which is fixed to 1 in this paper (p. 3, footnote 1).


We thank all the reviewers for their time and effort in evaluating our paper

Neural Information Processing Systems

We thank all the reviewers for their time and effort in evaluating our paper. We will discuss the assumptions in the context of the SDE in Eq. 4, which is our main setting. The boundedness assumption requires the solutions to the SDE to be 'non-explosive'. On the other hand, in Thm. S3 we already proved a bound which does not require H4. This assumption is common in statistics (see [Bra83]) but hard to verify in practice. We agree that Prop. 1 can be difficult to grasp at first sight. BN is not fully understood; our main focus is the relationship between generalization and intrinsic dimensionality. The relationship between BN and dimensionality is an interesting future direction which would complement our work.





The pros and cons of not raking leaves

Popular Science

Your local beetles and chipmunks will thank you for backing away from the rake. There's the shift to cooler weather and the opportunity to pull out your favorite hoodie and enjoy the colorful symphony of fall leaves. However, many do not look forward to raking leaves. Believe it or not, you do have a choice when it comes to whether or not to rake leaves.